
Conversation


@Belerafon Belerafon commented May 28, 2025

Related GitHub Issue

Closes: #3621

Description

Why: Roo Code often emits API socket-timeout errors when using slow, self-hosted LLMs — either while reading a large file or even during the initial prompt phase. Yet these slow models can be usable in autonomous scenarios for working on private source code. It would be great to have a user-tunable setting to configure the API timeout for self-hosted providers (OpenAI-compatible, Ollama, LMStudio, etc.). The timeout can be up to 30 minutes to allow big file chunks or prompts to be fully processed. This way, even an older laptop could produce meaningful results overnight with Roo Code.

What: This PR adds a timeout option to the settings for the LMStudio, Ollama, and OpenAI-Compatible providers, and forwards this value into the OpenAI class constructor. Translations for all supported languages are included.
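As a rough sketch of what "forwards this value into the OpenAI class constructor" involves (helper and option names here are assumed for illustration, not taken verbatim from the PR): the user-facing setting is in minutes, while the openai Node client expects milliseconds, so a conversion with a sane fallback sits in between.

```typescript
// Sketch only: names are illustrative. The openai Node client's `timeout`
// option is in milliseconds; its built-in default is 10 minutes.
const DEFAULT_TIMEOUT_MS = 600_000;

function apiTimeoutMs(timeoutMinutes?: number): number {
	// Missing or non-positive values fall back to the library default.
	if (!timeoutMinutes || timeoutMinutes <= 0) {
		return DEFAULT_TIMEOUT_MS;
	}
	return timeoutMinutes * 60_000;
}

// A provider handler would then pass the value through, roughly:
// const client = new OpenAI({ baseURL, apiKey, timeout: apiTimeoutMs(options.openAiApiTimeout) });
```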

Test Procedure

Connected an OpenAI-compatible client to a local slow LLM and tested timeout settings of 5, 10, and 20 minutes.

Observed socket-timeout errors and retries occurring according to the configured timeout.

Repeated manual testing for LMStudio provider with the same timeout values.

Type of Change

  • 🐛 Bug Fix: Non-breaking change that fixes an issue.
  • ✨ New Feature: Non-breaking change that adds functionality.
  • 💥 Breaking Change: Fix or feature that would cause existing functionality to not work as expected.
  • ♻️ Refactor: Code change that neither fixes a bug nor adds a feature.
  • 💅 Style: Changes that do not affect the meaning of the code (white-space, formatting, etc.).
  • 📚 Documentation: Updates to documentation files.
  • ⚙️ Build/CI: Changes to the build process or CI configuration.
  • 🧹 Chore: Other changes that don't modify src or test files.

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Code Quality:
    • My code adheres to the project's style guidelines.
    • There are no new linting errors or warnings (npm run lint).
    • All debug code (e.g., console.log) has been removed.
  • Testing:
    • New and/or updated tests have been added to cover my changes.
    • All tests pass locally (npm test).
    • The application builds successfully with my changes.
  • Branch Hygiene: My branch is up-to-date (rebased) with the main branch.
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Changeset: A changeset has been created using npm run changeset if this PR includes user-facing changes or dependency updates.
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Screenshots / Videos

Before
image

After
image

Documentation Updates

The UI guide should include:

Timeout for API requests to the provider (minimum 5 minutes). If no response is received within this period, the request is retried. Increase this value for slower models.

Additional Notes

This is my first PR ever. So... do what you must.
Also, I am not a frontend developer; these code changes were made by an LLM with my revisions.

Get in Touch


Important

Introduces customizable API timeout settings for self-hosted LLMs, updates UI components for user input, and adds translations in multiple languages.

  • Behavior:
    • Adds customizable API timeout setting for LMStudio, Ollama, and OpenAI-Compatible providers in provider-settings.ts.
    • Applies timeout setting in lm-studio.ts, ollama.ts, and openai.ts by converting minutes to milliseconds.
  • UI Components:
    • Updates LMStudio.tsx, Ollama.tsx, and OpenAICompatible.tsx to include input fields for API timeout settings.
  • Translations:
    • Adds translations for API timeout settings in multiple language files, including settings.json for ca, de, en, es, fr, hi, id, it, ja, ko, nl, pl, pt-BR, ru, tr, vi, zh-CN, and zh-TW.

This description was created by Ellipsis for 70e9ab6. You can customize this summary. It will automatically update as commits are pushed.

@Belerafon Belerafon requested review from cte and mrubens as code owners May 28, 2025 13:29
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. enhancement New feature or request labels May 28, 2025
Contributor

The label translation key here is settings:providers.openAiApiTimeout, but this is in the LMStudio component and the value comes from lmStudioApiTimeout. Consider renaming the translation key to be consistent (e.g., lmStudioApiTimeout).

Suggested change
<label className="block font-medium mb-1">{t("settings:providers.openAiApiTimeout")}</label>
<label className="block font-medium mb-1">{t("settings:providers.lmStudio.apiTimeout")}</label>

Author

Since the LMStudio, Ollama, and OpenAI-Compatible providers all use the OpenAI API library, the timeout name, description, and behavior are identical. Therefore, a common translation key is intentionally used for all three providers.

Contributor

The placeholder text is using the translation key settings:placeholders.numbers.maxTokens, which doesn't seem appropriate for a timeout field. Consider using a key that matches the timeout setting for consistency.

Suggested change
placeholder={t("settings:placeholders.numbers.maxTokens")}
placeholder={t("settings:providers.openAiApiTimeout")}

Contributor

The label is using the translation key settings:providers.openAiApiTimeout, but this file is for Ollama. It looks like a copy-paste error. Consider updating this to a key that reflects Ollama (e.g., settings:providers.ollamaApiTimeout).

Suggested change
<label className="block font-medium mb-1">{t("settings:providers.openAiApiTimeout")}</label>
<label className="block font-medium mb-1">{t("settings:providers.ollamaApiTimeout")}</label>

Contributor

Typo / copy-paste issue: The placeholder translation key in this field is set to settings:placeholders.numbers.maxTokens, which doesn't match the field's purpose (API timeout). Consider using a more appropriate translation key (e.g. one related to timeout) for clarity.

Suggested change
placeholder={t("settings:placeholders.numbers.maxTokens")}
placeholder={t("settings:placeholders.numbers.timeout")}

Member

This seems like valid feedback; would using openAiApiTimeout here make more sense?

@daniel-lxs daniel-lxs moved this from Triage to PR [Draft / In Progress] in Roo Code Roadmap May 28, 2025
@daniel-lxs
Member

daniel-lxs commented May 28, 2025

Hey @Belerafon, can you take another look at the translations? It seems that you could use translations that are more relevant.


Isn't the OpenAI class timeout in seconds though? https://github.com/openai/openai-python?tab=readme-ov-file#timeouts

Author

image
The Node.js OpenAI library's TypeScript definitions (see screenshot) explicitly state:

"The maximum amount of time (in milliseconds) that the client should wait for a response from the server before timing out a single request."

Moreover, the built-in default is set here:

timeout: options.timeout ?? 600000 /* 10 minutes */

so here it's definitely milliseconds, not seconds. Maybe the Python library uses seconds...
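To make the units pitfall concrete (a small illustration, not code from the PR): if the minutes value from the settings were passed to the Node client unconverted, the request would time out almost immediately.

```typescript
// The Node openai client's `timeout` is in milliseconds.
const minutes = 20;
const wrongTimeout = minutes;          // 20 ms: would time out almost instantly
const rightTimeout = minutes * 60_000; // 1_200_000 ms = 20 minutes
```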

@Belerafon
Author

Hey @Belerafon, can you take another look at the translations? It seems that you could use translations that are more relevant.

I’ve fixed the placeholder translation—it was clearly a bug. However, I’ve kept using
t("settings:providers.openAiApiTimeout") and
t("settings:providers.openAiApiTimeoutDescription")
for all three providers, since the texts are identical and they all rely on the same OpenAI library.

By the way, I’ve noticed an issue with the tests for my PR: while npm test passes locally on my Windows 11 host, the platform-unit-test (ubuntu-latest) job on GitHub is failing. I’m not entirely sure what’s going wrong and would appreciate any help.

@daniel-lxs
Member

Hey @Belerafon, can you try rebasing your branch against main?

@Belerafon Belerafon force-pushed the API_Request_Timeout branch from 46ce635 to 9827fd3 Compare May 29, 2025 07:09
@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels May 29, 2025
@Belerafon
Author

Hey @Belerafon, can you try rebasing your branch against main?

Looks like I did it

@FrancoFun

FrancoFun commented May 30, 2025

My guess is that you will also have to update the test. It probably fails because it doesn't expect a timeout parameter:

api/providers/__tests__/openai.spec.ts:101:30
it("should set default headers correctly", () => {
// Check that the OpenAI constructor was called with correct parame…

Adding:
timeout: expect.any(Number),
in openai.spec.ts will hopefully do the trick.

I don't see this specific test for Ollama and LMStudio, so they should be fine.
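For context, `timeout: expect.any(Number)` in a jest/vitest spec just asserts that the mocked constructor received some finite numeric `timeout`. A plain-TypeScript stand-in for that matcher logic (field names assumed) would be:

```typescript
// Stand-in for the jest matcher: does a captured constructor-arguments
// object contain a finite numeric `timeout` field?
interface CapturedCtorArgs {
	baseURL?: string;
	apiKey?: string;
	timeout?: unknown;
}

function hasNumericTimeout(args: CapturedCtorArgs): boolean {
	return typeof args.timeout === "number" && Number.isFinite(args.timeout);
}
```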

@Belerafon Belerafon force-pushed the API_Request_Timeout branch from 9827fd3 to 60c6926 Compare May 30, 2025 12:38
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:XL This PR changes 500-999 lines, ignoring generated files. labels May 30, 2025
@vercel

vercel bot commented May 30, 2025

@Belerafon is attempting to deploy a commit to the Roo Code Team on Vercel.

A member of the Team first needs to authorize it.

@Belerafon
Author

Belerafon commented May 30, 2025

Adding: timeout: expect.any(Number), in openai.spec.ts will hopefully do the trick.

Thanks for your help, the test passes now! All checks are green except [Vercel], which requires authorization to deploy. I have no idea why I would need to deploy something somewhere with my small commit, but I guess it's beyond my capabilities.

@FrancoFun

My guess is that you need to remove the "Draft / In Progress" label and assign the "Needs Preliminary Review" label. You can also tick the "updated tests have been added to cover my changes" item in the checklist in your first post.

@daniel-lxs
Member

Hey @Belerafon you can ignore that vercel check, it's something we are setting up. Also you don't need to change the labels at all.

Member

@daniel-lxs daniel-lxs May 30, 2025


It seems like the Hindi translation for openAiApiTimeoutDescription is still in English.

@daniel-lxs
Member

Hey @Belerafon, I noticed that the timeout description mentions "min 5 min", but the input field accepts any positive number.

Should we add validation to enforce the 5-minute minimum? Or perhaps use 10 minutes as the minimum since that's the OpenAI library's default timeout?

@Belerafon
Author

Hey @Belerafon, I noticed that the timeout description mentions "min 5 min", but the input field accepts any positive number.

Should we add validation to enforce the 5-minute minimum? Or perhaps use 10 minutes as the minimum since that's the OpenAI library's default timeout?

I actually discovered this limitation (minimum 5 minutes) in my manual local tests. I didn't find any official information about it. Maybe the best way would be to just remove this information (min 5 min) from the description. But someone could potentially report a bug after trying to set 1 minute and finding it doesn't work.

@daniel-lxs
Member

@Belerafon
I understand. What if we change the input to a slider, set the min value to 10, and maybe the max value to 120 minutes or something like that?

What do you think?

@Belerafon
Author

Belerafon commented May 30, 2025

@Belerafon I understand. What if we change the input to a slider, set the min value to 10, and maybe the max value to 120 minutes or something like that?

What do you think?

A shorter timeout is helpful for cloud providers that normally reply quickly but occasionally hang indefinitely and require a retry.
Discussion #1658 requests a way to introduce such timeouts (if I understood correctly).
If the timeout can usefully be below the default 10 minutes, it seems better not to constrain users to a higher minimum.

120 minutes isn't the best maximum either for my taste - I might want to run the new DeepSeek-R1-0528 on my laptop tonight and get something special and wonderful in the morning, but not Repeat 5...

@daniel-lxs
Member

@Belerafon

I understand, just trying to set this up in a way where the user doesn't accidentally set the timeout to an invalid value.

@Belerafon
Author

@Belerafon

I understand, just trying to set this up in a way where the user doesn't accidentally set the timeout to an invalid value.

In normal operation with standard providers, users can set any timeout value from 1 minute to infinity without affecting RooCode's functionality. In problematic cases, this timeout flexibility can be beneficial.
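A middle ground between a restrictive slider and no validation at all would be a lenient clamp that only rejects nonsensical input while keeping the range wide. This is a sketch under assumed bounds, not something from the PR:

```typescript
// Lenient validation sketch: reject only junk input, keep the range wide.
// Bounds are illustrative (1 minute to 24 hours).
function clampTimeoutMinutes(value: number, min = 1, max = 1_440): number {
	if (!Number.isFinite(value) || value <= 0) {
		return 10; // fall back to the library's 10-minute default
	}
	return Math.min(max, Math.max(min, Math.round(value)));
}
```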

@Belerafon Belerafon force-pushed the API_Request_Timeout branch from fc449a0 to ab23de9 Compare May 31, 2025 05:58
@daniel-lxs daniel-lxs marked this pull request as draft June 3, 2025 23:36
@Belerafon
Author

@daniel-lxs

I think I have implemented all the fixes we discussed.

Could you please let me know if there’s anything else that needs to be addressed so that I can mark this PR as “Ready for review” and request approval from the code owners?

Thank you.

Belerafon and others added 7 commits June 19, 2025 16:06
Roo Code often emits API socket-timeout errors when using self-hosted, slow LLMs — either while reading a large file or even during the initial prompt phase. Yet these slow models can be usable in autonomous (e.g. overnight) scenarios for working on some private code source. It would be great to have a user-tunable setting to configure the API timeout for self-hosted providers (OpenAI-compatible, Ollama, LMStudio, etc.). The timeout can be up to 30 minutes to allow big file chunks or prompts to be fully processed. This way, even an older laptop could produce meaningful results overnight with roo code.
Update the test to verify that the OpenAI client is initialized with a timeout parameter,
while maintaining the existing test structure and assertions for other configuration options.
@Belerafon Belerafon force-pushed the API_Request_Timeout branch from ab23de9 to f3237a6 Compare June 19, 2025 13:25
@Belerafon Belerafon marked this pull request as ready for review June 19, 2025 13:50
@Belerafon Belerafon requested a review from jr as a code owner June 19, 2025 13:50
})(),
}}
className="w-full mt-4">
<label className="block font-medium mb-1">{t("settings:providers.openAiApiTimeout")}</label>
Contributor

The translation key used in this label is "settings:providers.openAiApiTimeout", which seems inconsistent with the LMStudio component (e.g., other keys use "lmStudio"). This might be a typographical error—please check if the key should be corrected to match the LMStudio naming, such as "settings:providers.lmStudioApiTimeout".

This comment was generated because it violated a code review rule: irule_C0ez7Rji6ANcGkkX.

})(),
}}
className="w-full mt-4">
<label className="block font-medium mb-1">{t("settings:providers.openAiApiTimeout")}</label>
Contributor

Typographical issue: The label text key here is "settings:providers.openAiApiTimeout" but given that this configuration is for Ollama (using ollamaApiTimeout), it seems like it might be a mistake. Please verify if this should be "settings:providers.ollamaApiTimeout" instead.

Suggested change
<label className="block font-medium mb-1">{t("settings:providers.openAiApiTimeout")}</label>
<label className="block font-medium mb-1">{t("settings:providers.ollamaApiTimeout")}</label>

This comment was generated because it violated a code review rule: irule_C0ez7Rji6ANcGkkX.

<label className="block font-medium mb-1">{t("settings:providers.openAiApiTimeout")}</label>
</VSCodeTextField>
<div className="text-sm text-vscode-descriptionForeground -mt-2 mb-2">
{t("settings:providers.openAiApiTimeoutDescription")}
Contributor

Typographical issue: The description text key is "settings:providers.openAiApiTimeoutDescription" which may be unintended given the context of using Ollama. Confirm whether this key should refer to Ollama rather than OpenAI.

Suggested change
{t("settings:providers.openAiApiTimeoutDescription")}
{t("settings:providers.ollamaApiTimeoutDescription")}

This comment was generated because it violated a code review rule: irule_C0ez7Rji6ANcGkkX.

@daniel-lxs daniel-lxs marked this pull request as draft June 19, 2025 16:50
@hannesrudolph
Copy link
Collaborator

stale

@github-project-automation github-project-automation bot moved this from PR [Draft / In Progress] to Done in Roo Code Roadmap Jul 7, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Jul 7, 2025

Labels

  • enhancement: New feature or request
  • PR - Draft / In Progress
  • size:L: This PR changes 100-499 lines, ignoring generated files.

Projects

Archived in project

Development

Successfully merging this pull request may close these issues.

Local model API request fails if prompt ingestion takes more than 10 minutes

4 participants